1.
Characterization results via the Rényi entropy of m-generalized order statistics are considered, along with examples and related stochastic orderings. Previous results for common order statistics are included.
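As a numerical illustration of the quantity the paper works with (not of its characterization results), the Rényi entropy of order α can be approximated on a grid; the grid and the density below are illustrative choices:

```python
import numpy as np

def renyi_entropy(pdf_vals, dx, alpha):
    """Renyi entropy H_a = (1/(1-a)) * log( integral f(x)^a dx ),
    approximated by a left Riemann sum on an evenly spaced grid."""
    if alpha == 1.0:
        # Shannon limit as alpha -> 1: -integral f log f
        p = pdf_vals[pdf_vals > 0]
        return float(-np.sum(p * np.log(p)) * dx)
    return float(np.log(np.sum(pdf_vals ** alpha) * dx) / (1.0 - alpha))
```

For the standard exponential density, H_2 = log 2, which the grid approximation recovers to a few decimal places.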
2.
In this paper, we consider the deterministic trend model where the error process is allowed to be weakly or strongly correlated and subject to non‐stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt‐type estimator (with some initial condition) could be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non‐parametrically weighted Cochrane–Orcutt‐type estimator is then proposed. The efficiency is uniform over weak or strong serial correlation and non‐stationary volatility of unknown form. The feasible estimator relies on non‐parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data‐dependent smoothing bandwidth that can automatically adjust for the strength of non‐stationarity in volatilities. The implementation does not require pretesting persistence of the process or specification of non‐stationary volatility. Finite‐sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
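For reference, the classical Cochrane–Orcutt iteration that the abstract's weighted estimator builds on can be sketched as follows for a linear trend with AR(1) errors. This is only the baseline procedure, not the paper's non-parametrically weighted version; all names are hypothetical:

```python
import numpy as np

def cochrane_orcutt_trend(y, n_iter=5):
    """Baseline Cochrane-Orcutt for y_t = a + b*t + u_t, u_t = rho*u_{t-1} + e_t.
    Estimate rho from OLS residuals, quasi-difference, re-fit; repeat a
    fixed number of times. Returns (beta, rho) with beta = (a, b)."""
    t = np.arange(1, len(y) + 1, dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # initial OLS fit
    rho = 0.0
    for _ in range(n_iter):
        u = y - X @ beta                               # current residuals
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])     # AR(1) coefficient
        y_star = y[1:] - rho * y[:-1]                  # quasi-differenced data
        X_star = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    return beta, rho
```

Because the quasi-differenced regressors are X[1:] - rho * X[:-1], the fitted coefficients remain the intercept and slope of the original trend model.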
3.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
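The "standard method suggested in classical textbooks" is the Wald interval; one well-known alternative with coverage closer to nominal is the Wilson score interval. A minimal sketch of both at the 95% level (this is not the paper's FGPQ construction):

```python
import math

Z95 = 1.959963984540054  # standard normal 97.5% quantile

def wald_ci(x, n):
    """Textbook (Wald) 95% interval: p_hat +/- z*sqrt(p_hat(1-p_hat)/n).
    Known to undercover when p is near 0 or 1."""
    p = x / n
    half = Z95 * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n):
    """Wilson score 95% interval; coverage stays closer to nominal."""
    p = x / n
    denom = 1 + Z95 ** 2 / n
    centre = (p + Z95 ** 2 / (2 * n)) / denom
    half = Z95 * math.sqrt(p * (1 - p) / n + Z95 ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half
```

For a high conformance proportion such as 95 successes in 100 trials, the Wilson interval shifts toward 1/2 relative to the Wald interval, which is what rescues its coverage near the boundary.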
4.
We consider a method of moments approach for dealing with censoring at zero for data expressed in levels when researchers would like to take logarithms. A Box–Cox transformation is employed. We explore this approach in the context of linear regression where both dependent and independent variables are censored. We contrast this method with two others: (1) dropping records of data containing censored values, and (2) assuming normality for censored observations and the residuals in the model. Across the methods considered, where researchers are interested primarily in the slope parameter, estimation bias is consistently reduced using the method of moments approach.
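The role of the Box–Cox transformation here is that, unlike the logarithm, it is finite at zero whenever λ > 0, so observations censored at zero can remain in the sample. A minimal sketch of the transform itself:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) as lam -> 0.
    Defined at y = 0 for lam > 0 (value -1/lam), unlike the logarithm."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam
```

At λ = 1 the transform is a simple shift, and as λ → 0 it converges to the natural logarithm, which is why it nests the "take logs" case.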
5.
Modeling spatial overdispersion requires point process models with finite‐dimensional distributions that are overdispersed relative to the Poisson distribution. Fitting such models usually relies heavily on the properties of stationarity, ergodicity, and orderliness. In addition, although processes based on negative binomial finite‐dimensional distributions have been widely considered, they typically fail to satisfy all three properties required for fitting simultaneously. Indeed, it has been conjectured by Diggle and Milne that no negative binomial model can satisfy all three properties. In light of this, we change perspective and construct a new process based on a different overdispersed count model, namely, the generalized Waring (GW) distribution. While comparable to negative binomial processes in tractability and flexibility, the GW process is shown to possess all required properties and additionally to span the negative binomial and Poisson processes as limiting cases. In this sense, the GW process provides an approximate resolution to the conundrum highlighted by Diggle and Milne.
7.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for studying statistical inference under constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing-information principle and used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, importance sampling, and the Metropolis–Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the corresponding prediction intervals are obtained. Three optimality criteria are considered for finding the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative to well-known lifetime models. Finally, a simulation study is provided with discussion.
8.
Simulation results are reported on methods that allow both within group and between group heteroscedasticity when testing the hypothesis that independent groups have identical regression parameters. The methods are based on a combination of extant techniques, but their finite-sample properties have not been studied. Included are results on the impact of removing all leverage points or just bad leverage points. The method used to identify leverage points can be important and can improve control over the Type I error probability. Results are illustrated using data from the Well Elderly II study.
9.
Among the various estimators of spot (instantaneous) volatility, non-parametric estimators have remained a focus of research because they can measure spot volatility accurately. In practice, however, this class of estimators faces the problem of choosing an optimal bandwidth. Because the optimal bandwidth typically involves unknown parameters that are difficult to estimate, pinning down its numerical value in applications is hard. Taking the kernel estimator of spot volatility as an example, and borrowing bandwidth-selection ideas from non-parametric regression, this paper constructs an algorithm that computes the specific value of the optimal bandwidth directly from the data. Theoretical analysis and numerical verification show that the proposed algorithm has good stability, adaptability, and convergence speed. The algorithm paves the way for subsequent applied research on spot volatility.
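Paper 9 concerns bandwidth selection for a kernel estimator of spot volatility. A minimal sketch of such an estimator (Gaussian kernel; the bandwidth h is exactly the quantity whose data-driven choice the paper addresses; all names hypothetical):

```python
import numpy as np

def spot_vol_kernel(times, prices, t, h):
    """Kernel estimator of spot variance sigma^2(t) from high-frequency data:
    kernel-weighted sum of squared log-returns divided by the
    kernel-weighted sum of time increments."""
    r2 = np.diff(np.log(prices)) ** 2           # squared log-returns
    dt = np.diff(times)                         # time increments
    mid = times[:-1]                            # stamp of each return
    w = np.exp(-0.5 * ((mid - t) / h) ** 2)     # Gaussian kernel weights
    return float(np.sum(w * r2) / np.sum(w * dt))
```

On simulated Brownian data with constant volatility, the estimate at an interior time point should hover near the true variance; how close depends on h, which is the paper's subject.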
10.
Based on expected-utility maximization and the L1-median estimator, this paper studies the online portfolio selection problem. Unlike the EG (Exponential Gradient) strategy, which estimates the price trend from a single period of price information, we use multi-period price information to estimate the trend, so as to improve the performance of the online strategy. First, based on multi-period price data, the expected price trend is obtained via the L1-median estimator. Then, through expected-utility maximization, a new online strategy with linear time complexity, EGLM (Exponential Gradient via L1-Median), is proposed. By defining the distance between portfolio weight vectors through the relative-entropy function, the EGLM strategy is shown to possess the universal portfolio property. Finally, an empirical analysis using historical data from six domestic and international securities markets shows that EGLM achieves better competitive performance than the UP (Universal Portfolio) and EG strategies.
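Paper 10's EGLM strategy builds on the exponential-gradient (EG) update, which multiplies each weight by an exponential of its payoff gradient and re-normalizes. A minimal sketch of the plain single-period EG step (the learning rate eta is an illustrative choice, not from the paper):

```python
import numpy as np

def eg_update(w, x, eta=0.05):
    """One exponential-gradient (EG) portfolio update.
    w: current weights on the simplex; x: price relatives for the period.
    EGLM, per the abstract, replaces the single-period signal with an
    L1-median-based multi-period trend estimate; this is the plain EG step."""
    growth = w @ x                               # portfolio growth this period
    w_new = w * np.exp(eta * x / growth)         # multiplicative update
    return w_new / w_new.sum()                   # re-normalize to the simplex
```

The update shifts weight toward assets whose price relatives beat the portfolio's own growth, while the normalization keeps the weights a valid allocation.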

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号